61 research outputs found

    The Unfolded Protein Response and Autophagy as Drug Targets in Neuropsychiatric Disorders

    Neurons are polarized in structure, with a cytoplasmic compartment extending into dendrites and a long axon that terminates at the synapse. This high level of compartmentalization imposes specific challenges for protein quality control in neurons, making them vulnerable to disturbances that may lead to neurological dysfunctions, including neuropsychiatric diseases. Synapses and dendrites undergo structural modulations, regulated by neuronal activity, that involve key proteins requiring strict control of their turnover rates and degradation pathways. Recent advances in the study of the unfolded protein response (UPR) and autophagy have brought novel insights into the specific roles of these processes in neuronal physiology and synaptic signaling. In this review, we highlight recent data and concepts about the UPR and autophagy in neuropsychiatric disorders and synaptic plasticity, including a brief outline of possible therapeutic approaches to influence UPR and autophagy signaling in these diseases. Peer reviewed.

    Estimating position & velocity in 3D space from monocular video sequences using a deep neural network

    This work describes a regression model based on Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks for tracking objects from monocular video sequences. The target application being pursued is Vision-Based Sensor Substitution (VBSS). In particular, the tool-tip position and velocity in 3D space of a pair of surgical robotic instruments (SRI) are estimated for three surgical tasks, namely suturing, needle-passing and knot-tying. The CNN extracts features from individual video frames, and the LSTM network processes these features over time and continuously outputs a 12-dimensional vector with the estimated position and velocity values. A series of analyses and experiments are carried out on the regression model to reveal the benefits and drawbacks of different design choices. First, the impact of the loss function is investigated by adequately weighting the Root Mean Squared Error (RMSE) and the Gradient Difference Loss (GDL), using the VGG16 neural network for feature extraction. Second, this analysis is extended to a Residual Neural Network designed for feature extraction, which has fewer parameters than the VGG16 model, resulting in a reduction of ~96.44% in the neural network size. Third, the impact of the number of time steps used to model the temporal information processed by the LSTM network is investigated. Finally, the capability of the regression model to generalize to data related to "unseen" surgical tasks (unavailable in the training set) is evaluated. These analyses are experimentally validated on the public JIGSAWS dataset and provide guidelines for the design of a regression model in the context of VBSS, specifically when the objective is to estimate a set of 1D time-series signals from video sequences. Peer Reviewed. Postprint (author's final draft).
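    A minimal PyTorch sketch of the CNN+LSTM regression idea described in this abstract is given below, together with a weighted RMSE/GDL objective. The layer sizes, sequence handling, weighting factor, and the use of torchvision's VGG16 backbone are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: per-frame CNN features -> LSTM over time -> 12-D position/velocity output.
import torch
import torch.nn as nn
from torchvision import models


class CnnLstmRegressor(nn.Module):
    def __init__(self, hidden_size=256, output_dim=12):
        super().__init__()
        backbone = models.vgg16(weights=None)           # VGG16 feature extractor, as in the abstract
        self.cnn = backbone.features                    # convolutional part only
        self.pool = nn.AdaptiveAvgPool2d((1, 1))        # collapse spatial dimensions
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, output_dim)  # 3-D position + velocity for two tools

    def forward(self, frames):                          # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.pool(self.cnn(frames.flatten(0, 1))).flatten(1)  # (b*t, 512)
        seq_out, _ = self.lstm(feats.view(b, t, -1))    # temporal modelling across frames
        return self.head(seq_out)                       # (batch, time, 12)


def rmse_gdl_loss(pred, target, alpha=0.5):
    """Weighted sum of RMSE and a 1-D Gradient Difference Loss along the time axis."""
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    gdl = torch.mean(torch.abs((pred[:, 1:] - pred[:, :-1]) -
                               (target[:, 1:] - target[:, :-1])))
    return alpha * rmse + (1 - alpha) * gdl
```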

    Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model

    Providing force feedback as a feature in current Robot-Assisted Minimally Invasive Surgery systems still remains a challenge. In recent years, Vision-Based Force Sensing (VBFS) has emerged as a promising approach to address this problem. Existing methods have been developed in a Supervised Learning (SL) setting. Nonetheless, most video sequences related to robotic surgery are not provided with ground-truth force data, which can be easily acquired only in a controlled environment. A powerful approach to processing unlabeled video sequences and finding a compact representation for each video frame relies on using an Unsupervised Learning (UL) method. Afterward, a model trained in an SL setting can take advantage of the available ground-truth force data. In the present work, UL and SL techniques are used to investigate a model in a Semi-Supervised Learning (SSL) framework, consisting of an encoder network and a Long Short-Term Memory (LSTM) network. First, a Convolutional Auto-Encoder (CAE) is trained to learn a compact representation for each RGB frame in a video sequence. To facilitate the reconstruction of the high and low frequencies found in images, this CAE is optimized using an adversarial framework and an L1 loss, respectively. Thereafter, the encoder network of the CAE is serially connected with an LSTM network and trained jointly to minimize the difference between ground-truth and estimated force data. Datasets addressing the force estimation task are scarce; therefore, the experiments have been validated on a custom dataset. The results suggest that the proposed approach is promising. Peer Reviewed. Postprint (author's final draft).
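    The following is a hedged sketch of the semi-supervised pipeline outlined above: a convolutional auto-encoder pretrained on unlabeled frames, whose encoder is then reused with an LSTM to regress force from labeled sequences. The frame size (64x64), channel counts, latent size, force dimensionality, and training details (including how the adversarial term would be attached) are assumptions for illustration.

```python
# Sketch only: CAE pretraining (unsupervised) + encoder/LSTM force regression (supervised).
import torch
import torch.nn as nn


class ConvAutoEncoder(nn.Module):
    """Convolutional auto-encoder; assumes 64x64 RGB frames for illustration."""

    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),     # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),                      # compact per-frame code
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


class ForceRegressor(nn.Module):
    """Pretrained CAE encoder serially connected with an LSTM, trained on labeled force data."""

    def __init__(self, encoder, latent_dim=128, hidden=128, force_dim=3):
        super().__init__()
        self.encoder = encoder                           # reused from the CAE
        self.lstm = nn.LSTM(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, force_dim)

    def forward(self, frames):                           # frames: (batch, time, 3, 64, 64)
        b, t = frames.shape[:2]
        z = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(z)
        return self.head(out)                            # estimated force per time step


# Stage 1 (unsupervised): train the CAE on unlabeled frames with an L1 reconstruction loss;
# the adversarial term of the original work would add a discriminator on top of this.
# Stage 2 (supervised): reuse its encoder for force regression on the labeled sequences.
cae = ConvAutoEncoder()
recon_loss = nn.L1Loss()
model = ForceRegressor(cae.encoder)
```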

    A recurrent convolutional neural network approach for sensorless force estimation in robotic surgery

    Providing force feedback as relevant information in current Robot-Assisted Minimally Invasive Surgery systems constitutes a technological challenge due to the constraints imposed by the surgical environment. In this context, force estimation techniques represent a potential solution, enabling the sensing of the interaction forces between the surgical instruments and soft tissues. Specifically, if visual feedback is available for observing soft-tissue deformation, this feedback can be used to estimate the forces applied to these tissues. To this end, a force estimation model based on Convolutional Neural Networks and Long Short-Term Memory networks is proposed in this work. This model is designed to process both the spatiotemporal information present in video sequences and the temporal structure of tool data (the surgical tool-tip trajectory and its grasping status). A series of analyses are carried out to reveal the advantages of the proposal and the challenges that remain for real applications. This research work focuses on two surgical task scenarios, referred to as pushing and pulling tissue. For these two scenarios, different input data modalities and their effect on force estimation quality are investigated. These input data modalities are tool data, video sequences, and a combination of both. The results suggest that force estimation quality is better when both the tool data and the video sequences are processed by the neural network model. Moreover, this study reveals the need for a loss function designed to promote the modeling of the smooth and sharp details found in force signals. Finally, the results show that modeling forces during pulling tasks is more challenging than for the simpler pushing actions. Peer Reviewed. Postprint (author's final draft).
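    An illustrative PyTorch sketch of the multimodal fusion idea above follows: per-frame video features are concatenated with the tool data (tool-tip trajectory and grasping status) before the LSTM. The small CNN branch, feature sizes, and tool-data dimensionality are assumptions rather than the paper's configuration; a smoothness-plus-sharpness objective such as the RMSE/GDL combination sketched earlier could serve as the loss.

```python
# Sketch only: fuse video features with tool data, then regress force over time with an LSTM.
import torch
import torch.nn as nn


class MultimodalForceEstimator(nn.Module):
    def __init__(self, video_feat_dim=512, tool_dim=8, hidden=256, force_dim=3):
        super().__init__()
        self.video_net = nn.Sequential(                  # small CNN stand-in for the video branch
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, video_feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(video_feat_dim + tool_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, force_dim)

    def forward(self, frames, tool_data):
        # frames: (batch, time, 3, H, W); tool_data: (batch, time, tool_dim)
        b, t = frames.shape[:2]
        v = self.video_net(frames.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([v, tool_data], dim=-1)        # video + tool-data fusion per time step
        out, _ = self.lstm(fused)
        return self.head(out)                            # estimated force per time step
```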

    Dynamic Interaction of USP14 with the Chaperone HSC70 Mediates Crosstalk between the Proteasome, ER Signaling, and Autophagy

    USP14 is a deubiquitinating enzyme associated with the proteasome that is important for protein degradation. Here we show that upon proteasomal inhibition or expression of the mutant W58A-USP14, association of USP14 with the 19S regulatory particle is disrupted. MS-based interactomics revealed an interaction of USP14 with the chaperone HSC70 in neuroblastoma cells. Proteasome inhibition enhanced the binding of USP14 to HSC70, but also to the XBP1u and IRE1α proteins, demonstrating a role in the unfolded protein response. Striatal neurons expressing mutant huntingtin exhibited reduced USP14 and HSC70 levels, whilst inhibition of HSC70 downregulated USP14. Furthermore, proteasome inhibition or the use of mutant W58A-USP14 facilitated the interaction of USP14 with the autophagy protein GABARAP. Functionally, overexpression of W58A-USP14 increased GABARAP-positive autophagosomes in striatal neurons, and this was abrogated using the HSC70 inhibitor VER-155008. Modulation of the USP14-HSC70 axis by various drugs may represent a potential therapeutic target in HD to beneficially influence multiple proteostasis pathways. Peer reviewed.